A Bayes Rule for Density Matrices
Abstract
The classical Bayes rule computes the posterior model probability from the prior probability and the data likelihood. We generalize this rule to the case when the prior is a density matrix (symmetric positive definite and trace one) and the data likelihood a covariance matrix. The classical Bayes rule is retained as the special case when the matrices are diagonal. In the classical setting, the calculation of the probability of the data is an expected likelihood, where the expectation is over the prior distribution. In the generalized setting, this is replaced by an expected variance calculation, where the variance is computed along the eigenvectors of the prior density matrix and the expectation is over the eigenvalues of the density matrix (which form a probability vector). The variance along any direction is determined by the covariance matrix. Curiously enough, this expected variance calculation is a quantum measurement in which the covariance matrix specifies the instrument and the prior density matrix the mixture state of the particle. We motivate both the classical and the generalized Bayes rule with a minimum relative entropy principle, where the Kullback-Leibler version gives the classical Bayes rule and Umegaki's quantum relative entropy gives the new Bayes rule for density matrices.
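The update rule itself is not spelled out in this abstract, so the Python sketch below should be read as one plausible rendering of the derivation described above rather than as the paper's definitive algorithm. It assumes that the probability of the data is the expected variance tr(DC), i.e. the quantum measurement mentioned above with prior density matrix D and covariance matrix C, and that minimizing Umegaki's quantum relative entropy to D, penalized by the expected log-likelihood under C, yields a posterior proportional to the matrix exponential of log D + log C. The function names expected_variance and bayes_rule_density are illustrative only. When both matrices are diagonal, the sketch reduces to the classical Bayes rule.

import numpy as np
from scipy.linalg import expm, logm

def expected_variance(D, C):
    # Probability of the data: tr(D C) = sum_i lambda_i u_i' C u_i,
    # where D = sum_i lambda_i u_i u_i' is the prior's eigendecomposition.
    return float(np.trace(D @ C).real)

def bayes_rule_density(D, C):
    # Assumed posterior: expm(logm(D) + logm(C)), renormalized to trace one.
    # For commuting (e.g. diagonal) D and C this equals D C / tr(D C),
    # which is exactly the classical Bayes rule.
    M = expm(logm(D) + logm(C)).real
    return M / np.trace(M)

# Diagonal special case: recovers the classical Bayes rule.
prior = np.diag([0.6, 0.3, 0.1])        # prior probabilities on the diagonal
likelihood = np.diag([0.2, 0.5, 0.9])   # per-model data likelihoods on the diagonal
posterior = bayes_rule_density(prior, likelihood)
classical = prior.diagonal() * likelihood.diagonal()
classical = classical / classical.sum()
print(np.allclose(posterior.diagonal(), classical))          # True
print(np.isclose(expected_variance(prior, likelihood),
                 (prior.diagonal() * likelihood.diagonal()).sum()))  # True

# A genuinely non-commuting example: random density and covariance matrices.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthonormal eigenvectors
D = Q @ np.diag([0.5, 0.3, 0.2]) @ Q.T              # density matrix: PSD with trace one
C = np.cov(rng.standard_normal((3, 200)))           # sample covariance matrix
post = bayes_rule_density(D, C)
print(np.isclose(np.trace(post), 1.0))               # posterior is again trace one
print(np.all(np.linalg.eigvalsh((post + post.T) / 2) > -1e-10))  # and positive semidefinite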
Similar resources
A survey of Bayesian Data Mining - Part I: Discrete and semi-discrete Data Matrices
This tutorial summarises the use of Bayesian analysis and Bayes factors for finding significant properties of discrete (categorical and ordinal) data. It gives an overview of methods for finding dependencies and graphical models, latent variables, robust decision trees and association rules.
Comparison of different statistical methods in genomic selection of Holstein cattle
Genomic selection combines statistical methods with genomic data to predict genetic values for complex traits. The accuracy of prediction of genetic values in the selected population has a great effect on the success of this selection method. Accuracy of genomic prediction is highly dependent on the statistical model used to estimate marker effects in the reference population. Various factors such a...
BAYES ESTIMATION USING A LINEX LOSS FUNCTION
This paper considers estimation of a normal mean θ when the variance is unknown, using the LINEX loss function. The unique Bayes estimate of θ is obtained when the precision parameter has an inverse Gaussian prior density.
Variational Bayes Approximation
which is sometimes referred to as the product rule. As we well know, our primary interest lies in the factor p_{Z|X}(z|x). The rest of this lecture is concerned with characterizing this density using the method of variational Bayes (VB) approximation. This is the second example of a deterministic scheme in approximating the conditional density for inferential procedures (the first being Laplace ap...
Admissible Predictive Density Estimation
Let X|μ ∼ N_p(μ, v_x I) and Y|μ ∼ N_p(μ, v_y I) be independent p-dimensional multivariate normal vectors with common unknown mean μ. Based on observing X = x, we consider the problem of estimating the true predictive density p(y|μ) of Y under expected Kullback–Leibler loss. Our focus here is the characterization of admissible procedures for this problem. We show that the class of all generalized Baye...
Publication year: 2005